3rd edition as of August 2023
Module Overview
In Module 2, we will address what it means for psychology to be the scientific study of behavior and mental processes. We will do this by examining the steps of the scientific method and by describing the five major designs used in psychological research. We will also differentiate between reliability and validity and explain their importance for measurement. Psychology has very clear ethical standards and procedures for scientific research; we will discuss these and why they are needed. The content of this module relates to all areas of psychology, but we will also point out some methods used in the study of gender that may not be used in other subfields as frequently, or at all.
Module Outline
- 2.1. The Scientific Method
- 2.2. Research Designs Used in the Study of Gender Issues
- 2.3. Reliability and Validity
- 2.4. Research Ethics
Module Learning Outcomes
- Clarify what it means for psychology to be scientific by examining the steps of the scientific method and the three cardinal features of science.
- Outline the five main research methods used in psychology and clarify how they are utilized in the psychology of gender.
- Differentiate and explain the concepts of reliability and validity.
- Describe key features of research ethics.
2.1. The Scientific Method
Section Learning Objectives
- Define scientific method.
- Outline and describe the steps of the scientific method, defining all key terms.
- Identify and clarify the importance of the three cardinal features of science.
In Module 1, psychology was defined as the scientific study of behavior and mental processes. We will say more about behavior and mental processes throughout this book, but before proceeding, it is useful to elaborate on what makes psychology scientific. In fact, it is safe to say that most people outside the field of psychology, or one of its sister sciences, might be surprised to learn that psychology utilizes the scientific method.
The scientific method is a systematic method for gathering knowledge about the world around us. Systematic means that there is a set way to use it. The number of steps in the scientific method varies somewhat by source, but for the purposes of this book, the following breakdown will be used:
Table 2.1: The Steps of the Scientific Method
Step | Name | Description |
0 | Ask questions and be willing to wonder. | To study the world around us, you have to wonder about it. This inquisitive nature is the hallmark of critical thinking, our ability to assess claims made by others and make objective judgments that are independent of emotion and anecdote and based on hard evidence, and it is a requirement for being a scientist. For instance, one might wonder if people are more likely to stumble over words while being interviewed for a new job. |
1 | Generate a research question or identify a problem to investigate. | Through our wonderment about the world around us and why events occur as they do, we begin to ask questions that require further investigation to arrive at an answer. This investigation usually starts with a literature review, in which a search of the literature is conducted through a university library or a search engine such as Google Scholar to see what questions have already been investigated and what answers have been found. This helps us identify gaps, or missing information, in the collective scientific knowledge. For instance, in relation to word fluency and job interviews, we would execute a search using words relevant to our question as our parameters. Google Scholar and similar search engines match these terms against the key words that authors list with the abstracts of their articles. The abstract is a short description of what the article is about, similar to the summary of a novel on the back cover. These descriptions are useful for choosing which of the sometimes many articles to read. As you read articles, you can learn which questions have yet to be asked and answered, giving your future research project specificity and direction. |
2 | Form a prediction. | The coherent interpretation of a phenomenon is a theory. A hypothesis is a specific, testable prediction about that phenomenon which will occur if the theory is correct. Zajonc’s drive theory states that performing a task while being watched creates a state of physiological arousal, which strengthens the most likely, or dominant, response. According to this theory, being watched increases correct responses on well-practiced tasks and incorrect responses on unpracticed tasks. We could then hypothesize, or predict, that people who did not practice for their job interview will stumble over their words during the interview more than they normally do. In this way, theories and hypotheses have if-then relationships. |
3 | Test the hypothesis. | If the hypothesis is not testable, then we cannot show whether or not our prediction is accurate. Our plan of action for testing the hypothesis is called the research design. In the planning stage, we will select the appropriate research method to test our hypothesis and answer our question. We might choose to use the method of observation to record speech patterns during job interviews. Alternatively, we might use a survey method where participants report on their job interview experiences. We could also design an experiment to test the effects of practice on job interviews. |
4 | Interpret the results. | With our research study done, we now examine the data to see whether or not it supports our hypothesis. Descriptive statistics provide a means of summarizing or describing data and presenting it in a usable form, using the mean or average, median, and mode, as well as the standard deviation and variance. Inferential statistics allow us to make inferences about populations from our sample data by determining the statistical significance of the results. Significance is an indication of how confident we are that our results are not simply due to chance. Typically, psychologists prefer that there be no greater than a 5% probability that results are due to chance. A brief computational sketch of these statistics appears after this table. |
5 | Draw conclusions carefully. | We need to accurately interpret our results and not overstate our findings. To do this, we need to be aware of our biases and avoid emotional reasoning. For example, in an effort to stop a child from engaging in self-injurious behavior that could cause substantial harm or even death, it could be tempting to overstate the success of our treatment method. In the case of our job interview and speech fluency study, our descriptive statistics might have revealed that people in their 20s stumbled over words more than people in their 30s during their interviews. Even if the results from our sample are statistically significant, they might not be reflective of the overall population. Additionally, it is important not to imply causation when only a correlation has been demonstrated. |
6 | Communicate our findings to the larger scientific community. | Once we have decided whether our hypothesis is supported or not, we need to share this information with others so that they might comment critically on our methodology, statistical analyses, and conclusions. Sharing also allows for replication or repeating the study to confirm or produce different results. The dissemination of scientific research is accomplished through scientific journals, conferences, or newsletters released by many of the organizations mentioned in Section 1.3. |
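To make the terms in Step 4 concrete, here is a minimal computational sketch. It uses Python with SciPy and entirely hypothetical stumble counts from the job-interview example; none of the numbers, variable names, or groups come from an actual study.

```python
# A hypothetical illustration of Step 4: describing data, then testing significance.
from statistics import mean, median, mode, stdev, variance
from scipy import stats

# Invented counts of verbal stumbles during a mock job interview
practiced = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]      # interviewees who rehearsed
unpracticed = [5, 7, 4, 6, 8, 5, 6, 7, 5, 6]    # interviewees who did not

# Descriptive statistics: summarize each group in a usable form
for label, data in [("practiced", practiced), ("unpracticed", unpracticed)]:
    print(label, "mean:", mean(data), "median:", median(data), "mode:", mode(data),
          "sd:", round(stdev(data), 2), "variance:", round(variance(data), 2))

# Inferential statistics: an independent-samples t-test. Psychologists typically
# require the probability of a chance result (p) to be below .05.
t, p = stats.ttest_ind(practiced, unpracticed)
print(f"t = {t:.2f}, p = {p:.4f}, significant at the .05 level: {p < .05}")
```

If p falls below .05, we would say the difference between the groups is statistically significant, that is, unlikely to be due to chance alone.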
Science has at its root three cardinal features that we will encounter throughout this book. They are:
- Observation – Observational research is a type of non-experimental research method in which the goal is to describe the variables. In naturalistic observation, participants are observed in a natural setting. In structured observation, participants are observed in a more structured environment, such as a lab.
- Experimentation – To determine whether there is a causal, or cause-and-effect, relationship between two variables, we must be able to isolate variables. In a true experiment, the independent variable is systematically manipulated, and extraneous variables are controlled, or decreased in variability, as much as possible.
- Measurement – Whether researchers are using a non-experimental observational design or an experimental design, it is important for them to ensure that the scales they use are valid and reliable. Reliability refers to consistency, in which the same results are achieved at different times and by different researchers. Validity refers to whether or not the study measured the variable it was intended to measure. Validity and reliability are discussed further in Section 2.3. These concepts help us to know that the conclusions we draw from our data rest on trustworthy sources and techniques.
2.2. Research Designs Used in the Study of Gender Issues
Section Learning Objectives
- List the five main research methods used in psychology.
- Describe observational research, listing its advantages and disadvantages.
- Describe case study research, listing its advantages and disadvantages.
- Describe survey research, listing its advantages and disadvantages.
- Describe correlational research, listing its advantages and disadvantages.
- Describe experimental research, listing its advantages and disadvantages.
- State the utility and need for multimethod research.
Step 3 of the scientific method involves the scientist testing their hypothesis. Psychology as a discipline uses five main research designs: observational research, case studies, surveys, correlational designs, and experiments. Note that research can take two forms: quantitative, which is focused on numbers, and qualitative, which is focused on words. Psychology primarily focuses on quantitative research, though qualitative research is just as useful in its own ways. Qualitative and quantitative research are complementary approaches and often fill in important gaps for one another.
2.2.1. Observational Research
In naturalistic observation, the scientist studies human or animal behavior in its natural environment, which could include the home, school, or a forest. The researcher counts, measures, and rates behavior in a systematic way and at times uses multiple judges to ensure accuracy in how the behavior is being measured. This is called inter-rater reliability, as you will see in Section 2.3. The advantage of this method is that you witness behavior as it occurs, untainted by the experimenter. The disadvantage is that it could take a long time for the behavior to occur, and if the researcher is detected, the behavior of those being observed may be influenced and become artificial.
Laboratory observation is a type of structured observation which involves observing people or animals in a laboratory setting. A researcher who wants to know more about parent-child interactions might bring a parent and child into the lab to engage in preplanned tasks, such as playing with toys, eating a meal, or the parent leaving the room for a short period of time. The advantage of this method over the naturalistic method is that the experimenter can control for more extraneous variables and save time. The cost of using a laboratory observation method is that since the subjects know the experimenter is watching them, their behavior may become artificial. Behavior can also be artificial due to the structured lab being too unlike the natural environment.
2.2.1.1. Example of a psychology of gender study utilizing observation. Olino et al. (2012) indicate that a growing body of literature points to gender differences in child temperament and adult personality traits throughout life, but that many of these studies rely solely on parent-report measures. Their investigation used paternal report, maternal report, and laboratory observation. The laboratory batteries took approximately two hours, and children were exposed to standardized laboratory episodes with a female experimenter. These episodes were intended to elicit individual differences in temperament traits as they relate to behavioral engagement, social behavior, and emotionality. They included Risk Room, where children explore a set of novel and ambiguous stimuli (such as a black box); Stranger Approach, in which the child is left alone in the room briefly and a male research accomplice enters the room and speaks to the child; Pop-up Snakes, in which the child and experimenter surprise the child’s mother with a can of potato chips that contains coiled snakes; and Painting a Picture, which allows the child to play with watercolor pencils and crayons. Observers assigned a 1 for low intensity, 2 for moderate intensity, and 3 for high intensity in relation to facial, bodily, and vocal displays of positive affect, fear, sadness, and anger. Outside of these affective codes, observers also used behavioral codes on a similar three-point scale to assess engagement, sociability, activity, and impulsivity. The sample included 463 boys and 402 girls.
Across the three different measures, girls showed higher positive affect and fear and lower activity level compared to boys. When observed in the laboratory, girls showed higher levels of sociability but lower levels of negative emotionality, anger, sadness, and impulsive behavior. Maternal reports showed higher levels of overall negative emotionality and sadness for girls while paternal reports showed higher levels of sociability for boys.
Read the study for yourself: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3532859/
2.2.2. Case Studies
Psychology also utilizes a detailed description of one person, or a small group, based on careful observation. This was the approach the founder of psychoanalysis, Sigmund Freud, took to develop his theories. The advantage of this method is that you arrive at a rich description of the behavior being investigated in one or two individuals, but the disadvantage is that what you are learning may be unrepresentative of the larger population and, therefore, lacks generalizability. Case studies are also subject to the interpretation and bias of the researcher in that they decide what is important to include and not include in the final report. Despite these limitations, case studies can lead us to novel ideas about the cause of behavior and help us to study unusual conditions that occur too infrequently to study with large sample sizes in a systematic way.
2.2.2.1. Example of a psychology of gender study utilizing a case study. Mukaddes (2002) studied cross-gender behavior in children with high functioning autism. Specifically, two boys who showed persistent gender identity problems were followed over a period of about four years. Case 2, called A.A., was a 7-year-old boy referred to a child psychiatry department in Turkey due to language delay and issues with social interaction. The author goes on to describe the family history in detail and how the child showed a “persistent attachment to his mother’s and some significant female relative’s clothes and especially liked to make skirts out of their scarves. After age 5 years, he started to ‘play house’ and ‘play mother roles’… His parents have tried to establish good bonding between him with his father as an identification object. Despite this, his cross-gender behaviors are persistent” (p. 531). In the discussion of both cases, the author notes that the report of cross-gender behavior in autistic cases is rare, and that the case study attempts to, “…underline that (1) diagnosis of GID in autistic individuals with a long follow-up seems possible; and (2) high functioning verbally-able autistic individuals can express their gender preferences as well as other personal preferences” (p. 532).
To learn more about observational and case study designs, please take a look at the Research Methods in Psychology textbook by visiting:
https://kpu.pressbooks.pub/psychmethods4e/chapter/observational-research/
2.2.3. Surveys/Self-Report Data
A survey is a questionnaire consisting of at least one scale with some number of questions that assess a psychological construct of interest, such as parenting style, depression, locus of control, communication, attitudes, or sensation-seeking behavior. It may be administered by paper and pencil or by computer. Surveys allow for the collection of large amounts of data quickly, but the actual survey could be tedious for the participant, and social desirability, when a participant answers questions dishonestly so that they are seen in a more favorable light, could be an issue. For instance, if you are asking high school students about their sexual activity, they may not give genuine answers for fear that their parents will find out. If you wanted to know about the prejudicial attitudes of a group of people, it could be useful to choose the survey method. You could alternatively gather this information through an interview in a structured or unstructured fashion. Random sampling is an important component of survey research: everyone in the population has an equal chance of being included in the sample. This helps make the sample representative of the population in terms of demographic variables such as gender, age, ethnicity, race, sexual orientation, education level, and religious orientation.
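As a quick illustration of simple random sampling, the hypothetical Python sketch below draws a sample in which every member of an invented population has an equal chance of being selected; the population and sample sizes are arbitrary.

```python
# A hypothetical illustration of simple random sampling.
import random

population = [f"student_{i}" for i in range(1, 1001)]  # an invented population of 1,000 students
sample = random.sample(population, 100)                # each student has an equal chance of selection
print(len(sample), sample[:5])
```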
2.2.3.1. Example of a psychology of gender study utilizing a survey. Weiser (2004) wanted to see to what extent a gender gap existed in internet use. Utilizing a 19-item survey given to introductory psychology students, he found that males used the internet for entertainment and leisure activities while females used it for interpersonal communication and educational activities. Interestingly, he found that age and internet experience mediated the gender differences.
To learn more about the survey research design, please take a look at our Research Methods in Psychology textbook by visiting:
https://kpu.pressbooks.pub/psychmethods4e/chapter/overview-of-survey-research/
2.2.4. Correlational Research
This research method examines the relationship between two variables or two groups of variables. A numerical measure of the strength of this relationship, called the correlation coefficient, is derived and can range from -1.00 (a perfect inverse relationship, meaning that as one variable goes up the other goes down), through 0 (no relationship at all), to +1.00 (a perfect relationship in which as one variable goes up or down so does the other). The advantage of correlational research is that it allows us to detect statistical relationships between variables. Additionally, correlational research can be used when a researcher is not able to manipulate a variable, as in an experiment. An example of a negative correlation is that as a parent becomes more rigid, the child’s attachment to the parent goes down. In contrast, an example of a positive correlation is that as a parent becomes warmer toward the child, the child becomes more attached. However, one must take care not to conflate correlation with causation. Just because there is a statistical relationship between variables does not mean that one caused the other. A spurious correlation is a statistical relationship between two variables that does not reflect any causal link between them.
For a list of examples of spurious correlations visit: https://www.tylervigen.com/spurious-correlations
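To show how a correlation coefficient is actually computed, here is a minimal sketch in Python with SciPy. The parental-warmth and attachment ratings are invented for illustration; the resulting r falls somewhere between -1.00 and +1.00, as described above.

```python
# A hypothetical illustration of computing a correlation coefficient.
from scipy import stats

warmth = [3, 5, 6, 7, 8, 4, 9, 6, 7, 5]        # invented parental warmth ratings
attachment = [2, 4, 6, 6, 9, 3, 9, 5, 8, 4]    # invented child attachment ratings

r, p = stats.pearsonr(warmth, attachment)
print(f"r = {r:.2f}, p = {p:.4f}")  # r near +1.00 indicates a strong positive relationship
```

Even a large r here would not, by itself, tell us that warmth causes attachment.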
2.2.4.1. Example of a psychology of gender study utilizing a correlational method. In a study investigating the relationship of gender role identity, support for feminism, and willingness to consider oneself a feminist, Toller, Suter, and Trautman (2004) found that when men scored high on the Sexual Identity Scale (which indicates high levels of femininity), they were supportive of the women’s movement and were more willing to consider themselves feminists (positive correlations). In contrast, high scores on the Personal Attributes Questionnaire (PAQ) masculinity index were associated with being less likely to consider oneself a feminist (a negative correlation). For female participants, a positive correlation was found between masculinity scores and positive attitudes toward nontraditional gender roles. The authors note, “Possible explanations for these findings may be that women often describe feminists with masculine traits, such as ‘dominating’ and ‘aggressive.’ Thus, the more feminine women in our study may have viewed feminism and nontraditional gender roles as masculine.”
To learn more about the correlational research design, please take a look at the Research Methods in Psychology textbook by visiting:
https://kpu.pressbooks.pub/psychmethods4e/chapter/correlational-research/
2.2.5. Experiments
An experiment is a controlled test of a hypothesis in which a researcher manipulates one variable and measures its effect on another variable. The variable that is manipulated is called the independent variable (IV) and the one that is measured is called the dependent variable (DV). A common feature of experiments is to have a control group that does not receive the treatment or is not manipulated and an experimental group that does receive the treatment or manipulation. If the experiment includes random assignment, participants have an equal chance of being placed in the control or experimental group. The control group allows the researcher to make a comparison to the experimental group, making a causal statement possible.
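The sketch below illustrates random assignment and the control-versus-experimental comparison. It assumes the job-interview scenario used earlier in the module (practice as the IV, stumbles as the DV), and every participant label and score is invented.

```python
# A hypothetical illustration of random assignment in a two-group experiment.
import random
from scipy import stats

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 invented participants
random.shuffle(participants)                         # random assignment
experimental = participants[:10]                     # receives the manipulation (practice)
control = participants[10:]                          # does not

# Invented dependent-variable scores (stumbles per interview)
dv_experimental = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]
dv_control = [6, 5, 7, 4, 6, 5, 8, 6, 5, 7]

# Because assignment was random and only the IV differed between groups,
# a reliable difference on the DV can be attributed to the manipulation.
t, p = stats.ttest_ind(dv_experimental, dv_control)
print(f"t = {t:.2f}, p = {p:.4f}")
```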
2.2.5.1. Example of an experimental psychology of gender study. Wirth and Bodenhausen (2009) investigated whether gender played a moderating role in the stigma of mental illness using a web-based survey experiment. They asked participants to read a case summary in which the patient’s gender was manipulated along with the type of disorder, so that the cases described either male-typical or female-typical disorders. Their results showed that when the cases were gender typical, participants were less sympathetic, showed more negative affect, and were less likely to help than when the cases were gender atypical. The authors proposed that the gender-typical cases were much less likely to be seen as genuine mental disturbances by the participants.
To learn more about the experimental research design, please take a look at the Research Methods in Psychology textbook by visiting:
https://kpu.pressbooks.pub/psychmethods4e/part/experimental-research/
2.2.6. Multi-Method Research
As you have seen above, no single method alone is perfect. Each has strengths and limitations. As such, for psychologists to provide the clearest picture of what is affecting behavior or mental processes, several of these approaches are typically employed at different stages of the research process. This is called multi-method research.
2.2.7. Archival Research
Another technique used by psychologists is called archival research, or when the researcher analyzes data that has already been collected for another purpose. For instance, a researcher may request data from high schools about students’ GPA and SAT/ACT score(s) and then obtain their four-year GPA from the university they attended. This can be used to make a prediction about success in college and which measure – GPA or standardized test score – is the better predictor.
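A minimal sketch of this kind of archival analysis, assuming entirely hypothetical high school and university records, might compare how strongly each existing measure predicts college GPA using simple regression:

```python
# A hypothetical illustration of an archival analysis: which record better predicts college GPA?
from scipy import stats

hs_gpa = [3.2, 3.8, 2.9, 3.5, 3.9, 2.7, 3.6, 3.1, 3.4, 3.7]             # invented high school GPAs
sat = [1150, 1380, 1010, 1220, 1450, 980, 1300, 1100, 1180, 1340]       # invented SAT scores
college_gpa = [3.0, 3.7, 2.8, 3.3, 3.8, 2.5, 3.5, 3.0, 3.2, 3.6]        # invented four-year GPAs

for name, predictor in [("HS GPA", hs_gpa), ("SAT", sat)]:
    result = stats.linregress(predictor, college_gpa)
    print(f"{name}: r = {result.rvalue:.2f}, r-squared = {result.rvalue ** 2:.2f}")
```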
2.2.8. Meta-Analysis
Meta-analysis is a statistical procedure that allows a researcher to combine data from more than one study. For instance, Marx and Kettrey (2016) evaluated the association between the presence of gay-straight alliances (GSAs) for LGBTQ+ youth and their allies and the youth’s self-reported victimization. In all, the results of 15 studies spanning 2001 to 2014 were combined for a final sample of 62,923 participants and indicated that when a GSA is present, homophobic victimization, fear for safety, and hearing homophobic remarks are significantly lower. The authors state, “The findings of this meta-analysis should therefore be of value to advocates, educators, and policymakers who are interested in alleviating school-based victimization of youth, as those adolescents who are perceived to be LGBTQ+ are at a marked risk for such victimization.”
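At its core, a meta-analysis pools the effect sizes reported by individual studies, weighting each by its precision. The sketch below shows that arithmetic with invented effect sizes and variances; it is a generic fixed-effect illustration, not Marx and Kettrey’s data or their exact method.

```python
# A hypothetical illustration of combining effect sizes across studies (fixed-effect model).
effect_sizes = [-0.30, -0.25, -0.40, -0.20, -0.35]   # invented study-level effects (e.g., GSA presence vs. victimization)
variances = [0.010, 0.020, 0.015, 0.025, 0.012]      # invented sampling variances of those effects

weights = [1 / v for v in variances]                 # inverse-variance weights
pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)
se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```

Published meta-analyses also test whether the studies’ results are consistent with one another and often use random-effects models when they are not.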
2.2.9. Communicating Results
In scientific research, it is common practice to communicate the findings of our investigation. By reporting what we found in our study, other researchers can critique our methodology and address our limitations. Publishing allows psychology to grow its collective knowledge about human behavior based on converging evidence from different kinds of studies. We can also see where gaps still exist. Research is moved to the public domain so others can read and comment on it. Scientists can also replicate what we did and possibly extend our work if it is published.
Communication of results can be through conferences in the form of posters or oral presentations, newsletters from APA or one of its many divisions or other organizations, or through scientific research journals. Published journal articles represent a form of communication between scientists, and in these articles, the researchers describe how their work relates to previous research, how it replicates or extends this work, what their work might mean theoretically, and what it implies for future research.
Research articles begin with an abstract, which is a 150-250-word summary of the article. The purpose is to describe the experiment and allow the reader to make a decision about whether they want to read it further. The abstract provides a statement of purpose, an overview of the methods, the main results, and a brief statement of the conclusion. Key words are also given that allow for students and other researchers to find the article when conducting a search.
The abstract is followed by four major sections. The first is the introduction, designed to provide a summary of the current literature as it relates to your topic. It helps the reader see how you arrived at your hypothesis, as well as the purpose of your study. Essentially, it gives the logic behind the decisions you made. The hypothesis is also stated in the introduction. Second is the Method section. Since replication is a required element of science, we must have a way to share information about our design and sample with readers. This is the essence of the Method section, which covers three major aspects of your study: the participants, the materials or apparatus, and the procedure. The reader needs to know who was in your study so that limitations related to the generalizability of your findings can be identified and investigated in the future. The Method section also states operational definitions, describes any groups that were included, identifies random sampling or assignment procedures, and explains how any scales were scored. The Method section can be loosely thought of as a cookbook: the participants are your ingredients, the materials or apparatus are whatever tools you will need, and the procedure is the instructions for how to bake the cake.
Next is the Results section. In this section you state the outcomes of your experiment and whether or not they were statistically significant, and you can also present tables and figures. The final section is the Discussion, in which the main findings and hypothesis of the study are restated and an interpretation of the findings is offered. Finally, strengths and limitations of the study are stated, which allows you to propose future directions.
Whether you are writing a research paper for a class, preparing an article for publication, or reading a research article, the structure and function of a research article are the same. Understanding this will help you when reading psychology of gender research articles.
2.3. Reliability and Validity
Section Learning Objectives
- Clarify why reliability and validity are important.
- Define reliability and list and describe forms it takes.
- Define validity and list and describe forms it takes.
Recall that measurement involves the assignment of scores to an individual which are used to represent aspects of the individual, such as how conscientious they are or their level of depression. Whether or not the scores actually represent the individual is what is in question. Cuttler (2019) says in her book Research Methods in Psychology, “Psychologists do not simply assume that their measures work. Instead, they collect data to demonstrate that they work. If their research does not demonstrate that a measure works, they stop using it.” So how do they demonstrate that a measure works? This is where reliability and validity come in.
2.3.1. Reliability
First, reliability describes how consistent a measure is. It can be assessed in terms of test-retest reliability, or how reliable the measure is across time; internal consistency, or the “consistency of people’s responses across the items on a multiple-item measure” (Cuttler, 2019); and inter-rater reliability, or the consistency of results between different observers. In terms of inter-rater reliability, Cuttler (2019) writes, “If you were interested in measuring university students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does, in fact, have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other.”
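As a concrete illustration, the hypothetical Python sketch below computes an inter-rater correlation of the kind Cuttler describes, along with Cronbach’s alpha, a widely used index of internal consistency. All ratings are invented.

```python
# A hypothetical illustration of two reliability checks.
import numpy as np
from scipy import stats

# Inter-rater reliability: two observers rate the same 8 students' social skills
rater_a = [4, 6, 5, 7, 3, 8, 6, 5]
rater_b = [5, 6, 4, 7, 3, 7, 6, 4]
r, _ = stats.pearsonr(rater_a, rater_b)
print(f"inter-rater r = {r:.2f}")

# Internal consistency: 5 people answering a 4-item scale
items = np.array([[4, 5, 4, 5],
                  [2, 2, 3, 2],
                  [5, 4, 5, 5],
                  [3, 3, 2, 3],
                  [1, 2, 1, 2]])
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")
```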
2.3.2. Validity
A measure is considered to be valid if its scores represent the variable it is said to measure. For instance, if a scale says it measures depression, and it does, then we can say it is valid. Validity can take many forms. First, face validity is “the extent to which a measurement method appears ‘on its face’ to measure the construct of interest” (Cuttler, 2019). A scale purported to measure values should have questions about values such as benevolence, conformity, and self-direction, and not questions about depression or attitudes toward toilet paper.
Content validity is the degree to which a measure covers the construct of interest. Cuttler (2019) says, “… consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises.”
Oftentimes, we expect a person’s scores on one measure to be correlated with their scores on another measure to which it should be related; this is called criterion validity. For instance, consider parenting style and attachment. We would expect that if a person indicates on one scale that their father was authoritarian (or dictatorial), then attachment would be low or insecure. In contrast, if the mother was authoritative (or democratic), we would expect the child to show a secure attachment style.
As researchers, we strive for results that will generalize from our sample to the larger population. In the example of case studies, the sample is too small to make conclusions about everyone. If our results do generalize from the circumstances under which our study was conducted to similar situations, then we can say our study has external validity. External validity is also affected by how real, or natural, the research is. Two types of realism are possible. First, mundane realism occurs when the research setting closely resembles the real-world setting. Experimental realism is the degree to which the experimental procedures feel real to the participant. It does not matter whether the procedures truly mirror real life, only that they feel real to the participant. If they do, his or her behavior will be more natural and less artificial.
In contrast, a study is said to have good internal validity when we can confidently say that the effect on the dependent variable (the one that is measured) was due solely to our manipulation of the independent variable. A confound occurs when a factor other than the independent variable leads to changes in the dependent variable.
To learn more about reliability and validity, please visit:
https://kpu.pressbooks.pub/psychmethods4e/chapter/reliability-and-validity-of-measurement/
2.4. Research Ethics
Section Learning Objectives
- Exemplify instances of ethical misconduct in research.
- List and describe principles of research ethics.
Throughout this module so far, we have seen that it is important for researchers to understand the methods they are using. Equally important, they must understand and appreciate ethical standards in research. The American Psychological Association identifies high standards of ethics and conduct as one of its four main guiding principles or missions. To read about the other three, please visit https://www.apa.org/about/index.aspx. So why are ethical standards needed and what do they look like?
2.4.1. Milgram’s Study on Learning…or Not
The one psychologist most students know about is Stanley Milgram, if not by name then by his study on obedience using shock (Milgram, 1974). Essentially, two individuals came to each experimental session, but only one of them was a participant. The other was what is called a confederate, a part of the study without the participant knowing. The confederate was asked to pick heads or tails and then a coin was flipped. As you might expect, the drawing was rigged so that the confederate was always assigned to be the learner. The “experimenter,” who was also a confederate, took him into one room where he was hooked up to wires and electrodes. This was done while the “teacher,” the actual participant, watched, which added to the realism of the procedure. The teacher was then taken into an adjacent room and seated in front of a shock generator. The teacher was told it was his task to read a series of word pairs to the learner. After reading the list, he would present one word from each pair, and it was the learner’s task to state the word it had been paired with. If the learner answered incorrectly, he would be shocked. The shock generator started at 15 volts and increased in 15-volt increments up to 450 volts. The switches were labeled with terms such as “Slight shock,” “Moderate shock,” and “Danger: Severe Shock,” and the final two switches were ominously labeled “XXX.”
As the experiment progressed, the teacher would hear the learner scream, holler, plead to be released, complain about a heart condition, or say nothing at all. When the learner stopped replying, the teacher would turn to the experimenter and ask what to do, to which the experimenter indicated that he should treat nonresponses as incorrect and shock the learner. Most participants asked the experimenter whether they should continue at various points in the experiment. The experimenter issued a series of commands including, “Please continue,” “It is absolutely essential that you continue,” and “You have no other choice, you must go on.” Surprisingly, Milgram found that 65% of participants, simply because they were ordered to do so, shocked the learner all the way up to the XXX switches, levels that could have killed him had the shocks been real (in reality, no shocks were delivered).
Source: Milgram, S. (1974). Obedience to authority. New York, NY: Harper Perennial.
If you would like to learn more about the moral foundations of ethical research, please visit:
https://kpu.pressbooks.pub/psychmethods4e/chapter/moral-foundations-of-ethical-research/
2.4.2. Ethical Guidelines
Due to these studies, and others, the American Psychological Association (APA) established guiding principles for conducting psychological research. The principles can be organized in terms of when they apply during a person’s participation in a study.
2.4.2.1. Before participating. First, researchers must obtain informed consent, meaning the person agrees to participate because they have been told what will happen to them. They are given information about any risks they face, or potential harm that could come to them, whether physical or psychological. They are also told about confidentiality, or the person’s right not to be identified. Since most research is conducted with students taking introductory psychology courses, who often participate to earn required course credit, they have to be given the option to do something other than a research study to earn that credit. This is called an alternative activity and could take the form of reading and summarizing a research article. The amount of time taken to do this should not exceed the amount of time the student would be expected to participate in a study.
2.4.2.2. While participating. Participants are afforded the right to withdraw, that is, to exit the study if any discomfort is experienced.
2.4.2.3. After participating. Once their participation is over, participants should be debriefed, which is when the true purpose of the study is revealed and they are told where to go if they need assistance and how to reach the researcher if they have questions. Researchers are permitted to deceive participants, or intentionally withhold the true purpose of the study from them, but according to the APA, only a minimal amount of deception is allowed.
Human research must be approved by an Institutional Review Board or IRB. It is the IRB that will determine whether the researcher is providing enough information for the participant to give consent that is truly informed, if debriefing is adequate, and if any deception is allowed.
If you would like to learn more about how to use ethics in your research, please read:
https://kpu.pressbooks.pub/psychmethods4e/chapter/putting-ethics-into-practice/
Module Recap
In Module 1, we stated that psychology is the study of behavior and mental processes using strict standards of science. In Module 2, we outlined how this is achieved through the use of the scientific method and the research designs of observation, case study, surveys, correlation, and experiments. We also described the importance of valid and reliable measures. To give our research legitimacy, we must follow clear ethical standards for research, which include gaining informed consent from participants, telling them of the risks, giving them the right to withdraw, debriefing them, and using only minimal deception.